Video understanding is a growing field and a subject of intense research, which includes many interesting tasks for understanding both spatial and temporal information, e.g., action detection, action recognition, video captioning, and video retrieval. One of the most challenging problems in video understanding is feature extraction, i.e., extracting a contextual visual representation from a given untrimmed video, due to the long and complicated temporal structure of unconstrained videos. Different from existing approaches, which apply a pre-trained backbone network as a black box to extract visual representations, our approach aims to extract the most contextual information with an explainable mechanism. As we observe, humans typically perceive a video through the interactions between three main factors, i.e., the actors, the relevant objects, and the surrounding environment. Therefore, it is crucial to design a contextual, explainable video representation extraction that can capture each of these factors and model the relationships between them. In this paper, we discuss approaches that incorporate the human perception process into modeling actors, objects, and the environment. We choose video paragraph captioning and temporal action detection to illustrate the effectiveness of human-perception-based contextual representation in video understanding. Source code is publicly available at https://github.com/UARK-AICV/Video_Representation.
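As a rough illustration of how interactions between the three factors might be modeled, the PyTorch sketch below fuses pre-extracted actor, object, and environment features with cross-attention. All module and variable names are hypothetical, and this fusion scheme is only one plausible instantiation under stated assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fusing actor, object, and
# environment features with cross-attention, assuming each factor has
# already been extracted by off-the-shelf detectors/backbones.
import torch
import torch.nn as nn

class ContextualFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        # Actors attend to objects and to the environment, so the final
        # representation reflects interactions between the three factors.
        self.actor_object = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.actor_env = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, actors, objects, env):
        # actors: (B, Na, D), objects: (B, No, D), env: (B, 1, D)
        a_o, _ = self.actor_object(actors, objects, objects)
        a_e, _ = self.actor_env(actors, env, env)
        fused = self.proj(torch.cat([a_o, a_e], dim=-1))  # (B, Na, D)
        return fused.mean(dim=1)                          # clip-level feature

feat = ContextualFusion()(torch.randn(2, 4, 512),
                          torch.randn(2, 10, 512),
                          torch.randn(2, 1, 512))
```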
Video anomaly detection (VAD) -- commonly formulated as a multiple-instance learning (MIL) problem in a weakly supervised manner due to its labor-intensive nature -- is a challenging problem in video surveillance, where anomalous frames need to be localized in an untrimmed video. In this paper, we first propose to utilize ViT-encoded visual features from CLIP, in contrast with the conventional C3D or I3D features in the domain, to efficiently extract discriminative representations. We then model long- and short-range temporal dependencies and nominate the snippets of interest by leveraging our proposed Temporal Self-Attention (TSA). The ablation study conducted on each component confirms its effectiveness in the problem, and the extensive experiments show that our proposed CLIP-TSA outperforms the existing state-of-the-art (SOTA) methods by a large margin on two commonly used benchmark datasets in the VAD problem (UCF-Crime and ShanghaiTech Campus). The source code will be made publicly available upon acceptance.
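A hedged sketch of the overall idea (CLIP snippet features, temporal self-attention, top-k MIL scoring) is given below. The layer sizes, the scoring head, and the ranking loss form are assumptions for illustration, not the paper's exact architecture or objective.

```python
# Sketch: temporal self-attention over ViT-encoded CLIP snippet features,
# followed by a top-k MIL ranking loss for weak supervision.
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, clip_feats):            # (B, T, D) snippet features
        ctx, _ = self.attn(clip_feats, clip_feats, clip_feats)
        ctx = self.norm(clip_feats + ctx)     # residual temporal context
        return self.score(ctx).squeeze(-1)    # (B, T) anomaly scores

def mil_ranking_loss(scores_abn, scores_nrm, k=3):
    # Weakly supervised MIL: push the top-k snippet scores of an anomalous
    # video above those of a normal video.
    top_abn = scores_abn.topk(k, dim=1).values.mean(dim=1)
    top_nrm = scores_nrm.topk(k, dim=1).values.mean(dim=1)
    return torch.clamp(1.0 - top_abn + top_nrm, min=0).mean()
```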
Video paragraph captioning aims to generate a multi-sentence description of an untrimmed video with several temporal event locations in a coherent storytelling manner. Following the human perception process, where a scene is effectively understood by decomposing it into visual (e.g., human, animal) and non-visual components (e.g., action, relations) under the mutual influence of vision and language, we first propose a visual-linguistic (VL) feature. In the proposed VL feature, the scene is modeled by three modalities: (i) a global visual environment; (ii) local visual main agents; and (iii) linguistic scene elements. We then introduce an autoregressive Transformer-in-Transformer (TinT) to simultaneously capture the semantic coherence of intra- and inter-event contents within a video. Finally, we present a new VL contrastive loss function to ensure that the learnt embedding features match the caption semantics. Comprehensive experiments and extensive ablation studies on the ActivityNet Captions and YouCookII datasets show that the proposed Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms prior state-of-the-art methods in both accuracy and diversity.
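The sketch below illustrates one common form such a contrastive objective can take: a symmetric InfoNCE-style loss between event-level video embeddings and caption embeddings. The exact loss used in VLTinT may differ, and the temperature value and function name here are assumptions.

```python
# Illustrative VL contrastive loss between event embeddings and caption
# embeddings; matched pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def vl_contrastive_loss(video_emb, text_emb, temperature=0.07):
    # video_emb, text_emb: (N, D), one row per (event, caption) pair
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (N, N) similarities
    targets = torch.arange(v.size(0), device=v.device)
    # Diagonal entries are positives; all other pairs act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```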
In this paper, we leverage the human perception process, which involves interaction between vision and language, to generate a coherent paragraph description of an untrimmed video. We propose visual-linguistic (VL) features consisting of two modalities, i.e., (i) a vision modality to capture the global visual content of the entire scene and (ii) a language modality to extract scene-element descriptions of both human and non-human objects (e.g., animals, vehicles, etc.) as well as visual and non-visual elements (e.g., relations, activities, etc.). Furthermore, we propose to train our proposed VLCap under a contrastive learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that our VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
Telework "avatar work," in which people with disabilities can engage in physical work such as customer service, is being implemented in society. In order to enable avatar work in a variety of occupations, we propose a mobile sales system using a mobile frozen drink machine and an avatar robot "OriHime", focusing on mobile customer service like peddling. The effect of the peddling by the system on the customers are examined based on the results of video annotation.
We propose a novel action sequence planner based on the combination of affordance recognition and a neural forward model that predicts the effects of affordance execution. By performing affordance recognition on the predicted futures, we avoid relying on explicit definitions of affordance effects for multi-step planning. Because the system learns affordance effects from experience data, it can anticipate not only the canonical effects of an affordance but also situation-specific side effects. This enables the system to avoid planning failures caused by such non-canonical effects, and to exploit non-canonical effects to achieve a given goal. We evaluate the system in simulation on a set of test tasks that require taking both canonical and non-canonical affordance effects into account.
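A schematic sketch of this planning idea follows: candidate affordances are rolled forward through a learned forward model, and affordance recognition is re-run on the imagined futures, so no hand-written affordance-effect definitions are needed. All function names are placeholders; the search strategy (depth-limited, first feasible plan) is an assumption for illustration.

```python
# Placeholder planner: search over affordance sequences using a learned
# forward model and affordance recognition on predicted states.
from typing import Any, Callable, List, Optional

def plan(state: Any,
         recognize_affordances: Callable[[Any], List[Any]],
         forward_model: Callable[[Any, Any], Any],
         goal_reached: Callable[[Any], bool],
         depth: int = 4) -> Optional[List[Any]]:
    """Depth-limited search; returns an affordance sequence or None."""
    if goal_reached(state):
        return []
    if depth == 0:
        return None
    for affordance in recognize_affordances(state):
        predicted = forward_model(state, affordance)   # imagined next state
        rest = plan(predicted, recognize_affordances,
                    forward_model, goal_reached, depth - 1)
        if rest is not None:
            return [affordance] + rest                 # prepend chosen action
    return None
```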
This article presents our generative model for rhythm action games together with applications in business operations. Rhythm action games are video games in which the player is challenged to issue commands at the right timings during a music session. The timings are rendered in the chart, which consists of visual symbols, called notes, flying through the screen. We introduce our deep generative model, GenéLive!, which outperforms the state-of-the-art model by taking into account musical structures through beats and temporal scales. Thanks to its favorable performance, GenéLive! was put into operation at KLab Inc., a Japan-based video game developer, and reduced the business cost of chart generation by as much as half. The application target included the phenomenal "Love Live!," which has more than 10 million users across Asia and beyond, and is one of the few rhythm action franchises that has led the online era of the genre. In this article, we evaluate the generative performance of GenéLive! using production datasets at KLab as well as open datasets for reproducibility, while the model continues to operate in their business. Our code and the model, tuned and trained using a supercomputer, are publicly available.
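The rough sketch below shows one way a chart generator could condition note placement on audio features together with beat-position encodings and two temporal scales, i.e., the kind of "musical structure" signal the paper exploits. Everything here (layer choices, beat resolution, note types) is an assumption for illustration, not the GenéLive! architecture.

```python
# Hypothetical beat-aware chart generator: per-frame note logits from mel
# features plus an embedding of the position within a bar, processed at a
# short and a long temporal scale.
import torch
import torch.nn as nn

class BeatAwareChartGenerator(nn.Module):
    def __init__(self, audio_dim=80, beat_dim=16, hidden=256, note_types=4):
        super().__init__()
        self.beat_embed = nn.Embedding(48, beat_dim)   # position within a bar
        self.short = nn.Conv1d(audio_dim + beat_dim, hidden,
                               kernel_size=3, padding=1)
        self.long = nn.Conv1d(audio_dim + beat_dim, hidden,
                              kernel_size=31, padding=15)
        self.head = nn.Linear(2 * hidden, note_types)

    def forward(self, audio, beat_pos):
        # audio: (B, T, audio_dim) mel frames; beat_pos: (B, T) ints in [0, 48)
        x = torch.cat([audio, self.beat_embed(beat_pos)], dim=-1).transpose(1, 2)
        h = torch.cat([self.short(x), self.long(x)], dim=1).transpose(1, 2)
        return self.head(h)          # (B, T, note_types) per-frame note logits
```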
This paper proposes a novel training method for end-to-end scene text recognition. End-to-end scene text recognition offers high recognition accuracy, especially when using Transformer-based encoder-decoder models. To train a highly accurate end-to-end model, we need to prepare a large image-to-text paired dataset for the target language. However, it is difficult to collect such data, especially for resource-poor languages. To overcome this difficulty, the proposed method utilizes well-prepared large datasets in resource-rich languages, such as English, to train a resource-poor encoder-decoder model. Our key idea is to build a model whose encoder reflects knowledge of multiple languages while the decoder specializes in the resource-poor language. To this end, the proposed method pre-trains the encoder using a multilingual dataset that combines the resource-poor language's dataset and resource-rich language datasets, so as to learn language-invariant knowledge for scene text recognition. The proposed method also pre-trains the decoder using only the resource-poor language's dataset, making the decoder better suited to the resource-poor language. Experiments on Japanese scene text recognition using a small public dataset demonstrate the effectiveness of the proposed method.
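The following is a high-level sketch of that training schedule. The `train()` helper, the auxiliary decoder, the freezing of the encoder during decoder pre-training, and the final fine-tuning stage are stand-in assumptions, not the paper's exact recipe.

```python
# Schematic two-stage pre-training for a resource-poor scene text recognizer.
import torch.nn as nn

def train(encoder: nn.Module, decoder: nn.Module, dataset, epochs: int = 1):
    """Placeholder training loop: optimize encoder+decoder on `dataset`."""
    # ... standard cross-entropy training over image-text pairs ...
    pass

def build_recognizer(encoder, decoder, aux_decoder,
                     multilingual_data, poor_language_data):
    # Stage 1: pre-train the encoder (with an auxiliary decoder) on the
    # combined multilingual dataset so it learns language-invariant knowledge.
    train(encoder, aux_decoder, multilingual_data)

    # Stage 2: pre-train the target decoder on the resource-poor language
    # only, so it specializes in that language.
    for p in encoder.parameters():
        p.requires_grad = False      # keep the multilingual encoder fixed
    train(encoder, decoder, poor_language_data)

    # Stage 3: fine-tune the full encoder-decoder on the resource-poor data.
    for p in encoder.parameters():
        p.requires_grad = True
    train(encoder, decoder, poor_language_data)
    return encoder, decoder
```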
This paper proposes a novel knowledge distillation method for dialogue sequence labeling. Dialogue sequence labeling is a supervised learning task that estimates a label for each utterance in a target dialogue document, and is useful for many applications such as dialogue act estimation. Accurate labeling is often achieved by hierarchically structured large models composed of utterance-level and dialogue-level networks that capture contexts within an utterance and between utterances, respectively. However, due to their model size, such models cannot be deployed on resource-constrained devices. To overcome this difficulty, we focus on knowledge distillation, which trains a small model by distilling the knowledge of a large and high-performance teacher model. Our key idea is to distill the knowledge while preserving the complex contexts captured by the teacher model. To this end, the proposed method, hierarchical knowledge distillation, trains the small model to mimic the teacher model's output at each level, distilling the knowledge of the utterance-level and dialogue-level contexts trained in the teacher model. Experiments on dialogue act estimation and call scene segmentation demonstrate the effectiveness of the proposed method.
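One plausible form of such a hierarchical distillation objective is sketched below: besides matching the teacher's output distribution, the student is also trained to mimic the teacher's utterance-level and dialogue-level representations. The loss weights, the use of MSE at the intermediate levels, and the assumption that student and teacher hidden sizes match (or are projected to match) are all illustrative choices, not the paper's exact formulation.

```python
# Sketch of a hierarchical KD loss: output-level soft targets plus
# utterance-level and dialogue-level context matching.
import torch.nn.functional as F

def hierarchical_kd_loss(student_out, teacher_out,
                         student_utt, teacher_utt,   # utterance-level states
                         student_dlg, teacher_dlg,   # dialogue-level states
                         temperature=2.0, alpha=1.0, beta=1.0):
    # Output-level distillation with softened teacher targets.
    kd = F.kl_div(F.log_softmax(student_out / temperature, dim=-1),
                  F.softmax(teacher_out / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    # Context-level distillation at each level of the hierarchy.
    utt = F.mse_loss(student_utt, teacher_utt)
    dlg = F.mse_loss(student_dlg, teacher_dlg)
    return kd + alpha * utt + beta * dlg
```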
To realize continuous frailty care in the daily lives of older adults, we propose AHOBO, a frailty care robot for older adults at home. Two types of support systems using AHOBO were implemented to support older adults in both physical health and mental aspects. For physical frailty care, we focused on blood pressure and developed a support system for blood pressure measurement with AHOBO. For mental frailty care, we implemented coloring with AHOBO as a recreational activity with the robot. The usability of the systems was evaluated under the assumption of continuous use in daily life. For the blood pressure measurement support system, we conducted a qualitative questionnaire evaluation with 16 subjects, including older adults whose blood pressure was measured by the system. The results confirmed that the proposed robot does not affect blood pressure readings and is acceptable in terms of ease of use based on subjective evaluation. For the recreational coloring interaction, a subjective evaluation was conducted with two older adults under a verbal fluency task, and it was confirmed that the interaction can be used continuously in daily life. The widespread use of the proposed robot as an interface for AI that supports daily life will lead to a society in which AI robots provide support from cradle to grave.